11 research outputs found

    Scientific Explanation and the Philosophy of Persuasion: Understanding Rhetoric through Scientific Principles and Mechanisms

    This thesis explores whether Aristotle's Rhetoric is consistent with the principles and tools of contemporary science. The approach is to review Aristotle's Rhetoric (along with several modernizing ideas) in light of explanatory mechanisms from psychology, biology, cognitive science, and neuroscience. The thesis begins by reviewing Aristotle's Rhetoric and modern rhetorical contributions from Chaim Perelman and Christopher Tindale. A discussion of several psychological principles of reasoning and their relevance to philosophical rhetoric follows. Next, a computational cognitive science framework on emotions and cognition and its applicability to rhetoric is presented, followed by a discussion of principles from evolutionary biology concerning language evolution and morality and their relevance to rhetoric. The thesis concludes with a brief discussion of rhetorical ideas relative to the neuroanatomy of deductive and inductive reasoning, and relative to a view of morality founded on brain neurochemistry.

    Intent-aligned AI systems deplete human agency: the need for agency foundations research in AI safety

    The rapid advancement of artificial intelligence (AI) systems suggests that artificial general intelligence (AGI) systems may arrive soon. Many researchers are concerned that AI and AGI systems will harm humans through intentional misuse (AI-misuse) or through accidents (AI-accidents). With respect to AI-accidents, there is an increasing effort focused on developing algorithms and paradigms that ensure AI systems are aligned with what humans intend, e.g., AI systems that yield actions or recommendations that humans would judge as consistent with their intentions and goals. Here we argue that alignment to human intent is insufficient for safe AI systems, and that preservation of humans' long-term agency may be a more robust standard, one that needs to be separated explicitly and a priori during optimization. We argue that AI systems can reshape human intention and discuss the lack of biological and psychological mechanisms that protect humans from loss of agency. We provide the first formal definition of agency-preserving AI-human interactions, which focuses on forward-looking agency evaluations, and argue that AI systems - not humans - must be increasingly tasked with making these evaluations. We show how agency loss can occur in simple environments containing embedded agents that use temporal-difference learning to make action recommendations. Finally, we propose a new area of research called "agency foundations" and pose four initial topics designed to improve our understanding of agency in AI-human interactions: benevolent game theory, algorithmic foundations of human rights, mechanistic interpretability of agency representation in neural networks, and reinforcement learning from internal states.
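    For context on the learning rule mentioned in the abstract, here is a generic temporal-difference (TD(0)) value update on a toy chain environment. This is purely illustrative of the standard technique; it is not the paper's embedded-agent environment, and all values below are assumptions.

    ```python
    # TD(0) value learning on a 5-state chain: the agent always moves right
    # and receives reward 1 on reaching the final state. Each step applies
    # the update V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s)).
    n_states, alpha, gamma = 5, 0.1, 0.9
    V = [0.0] * n_states                 # value estimates, initialized to zero
    for _ in range(2000):                # episodes
        s = 0
        while s < n_states - 1:
            s_next = s + 1               # deterministic transition
            r = 1.0 if s_next == n_states - 1 else 0.0
            V[s] += alpha * (r + gamma * V[s_next] - V[s])   # TD(0) update
            s = s_next
    # V converges toward gamma**(steps to goal): V[3] -> 1.0, V[2] -> 0.9, ...
    ```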

    Characterizing single neuron activity patterns and dynamics using multi-scale spontaneous neuronal activity recordings of cat and mouse cortex

    Throughout most of the 20th century the brain was studied as a reflexive system, with ever-improving recording methods applied within a variety of sensory and behavioural paradigms. Yet the brains of most animals (and all mammals) are spontaneously active, with incoming sensory stimuli modulating rather than driving neural activity. The aim of this thesis is to characterize spontaneous neural activity across multiple temporal and spatial scales, relying on biophysical simulations, experiments, and analysis of recordings from the visual cortex of cats and the dorsal cortex and thalamus of mice. Biophysically detailed simulations yielded novel datasets for testing spike sorting algorithms, which are critical for isolating single-neuron activity. The sorting algorithms tested provided low error rates, with operator skill proving as important as the choice of sorting suite. The simulated datasets have characteristics similar to in vivo acquired data, and ongoing larger-scope efforts are proposed for developing the next generation of spike sorting algorithms and extracellular probes. Single-neuron spontaneous activity was correlated with dorsal cortex neural activity in mice. Spike-triggered maps revealed that spontaneously firing cortical neurons were co-activated with homotopic and mono-synaptically connected cortical areas, whereas thalamic neurons co-activated with more diversely connected areas. Both bursting and tonic firing modes yielded similar maps, and the time courses of spike-triggered maps revealed distinct patterns, suggesting such dynamics may constitute intrinsic single-neuron properties. The mapping technique extends previous work linking spontaneous neural activity across temporal and spatial scales and suggests additional avenues of investigation. Synchronized-state electrophysiological recordings from cat visual and mouse sensory cortex revealed that spontaneously occurring UP-state transitions fall into stereotyped classes of events that can be grouped. Single visual cortex neurons active during UP-state transitions fire in a partially preserved order, extending previous findings on high-firing-rate neurons in rat somatosensory and auditory cortex. The firing order of many neurons changes over periods longer than 30 minutes, suggesting that a complex, non-stationary temporal neural code may underlie both spontaneous and stimulus-evoked neural activity. This thesis shows that ongoing spontaneous brain activity contains substantial structure that can be used to further our understanding of brain function.

    BioNet: A Python interface to NEURON for modeling large-scale networks.

    There is significant interest in the neuroscience community in developing large-scale network models that integrate diverse sets of experimental data to help elucidate the mechanisms underlying neuronal activity and computations. Although powerful numerical simulators (e.g., NEURON, NEST) exist, data-driven large-scale modeling remains challenging due to the difficulties involved in setting up and running network simulations. We developed a high-level application programming interface (API) in Python that facilitates building large-scale, biophysically detailed networks and simulating them with NEURON on parallel computer architectures. This tool, termed "BioNet", is designed to support a modular workflow whereby the description of a constructed model is saved as files that can subsequently be loaded for further refinement and/or simulation. The API supports both NEURON's built-in and user-defined models of cells and synapses. It is capable of simulating a variety of observables directly supported by NEURON (e.g., spikes, membrane voltage, intracellular [Ca++]), as well as plugging in modules for computing additional observables (e.g., extracellular potential). The high-level API obviates the time-consuming development of custom code for implementing individual models and enables easy model sharing via standardized files. This tool will help refocus neuroscientists on addressing outstanding scientific questions rather than developing narrow-purpose modeling code.
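    The modular save/load workflow described above can be sketched minimally: a network description is written to disk as files, then reloaded in a later session for refinement or simulation. The JSON format and field names below are a toy illustration of ours, not BioNet's actual file format.

    ```python
    import json

    def save_network(nodes, edges, path):
        """Persist a network description so it can be reloaded later."""
        with open(path, "w") as f:
            json.dump({"nodes": nodes, "edges": edges}, f)

    def load_network(path):
        """Reload a previously saved network description."""
        with open(path) as f:
            return json.load(f)

    # Build a toy description: two populations and one connection class.
    nodes = [{"pop": "exc", "N": 80}, {"pop": "inh", "N": 20}]
    edges = [{"source": "exc", "target": "inh", "syn_weight": 5e-5}]

    save_network(nodes, edges, "net.json")
    net = load_network("net.json")   # later session: refine or simulate
    ```

    The point of the round trip is that the saved files, not the building script, become the shareable artifact.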

    Application example: Model of the layer 4 in mouse V1.

    <p>(<b>A</b>) The <i>in silico</i> study [<a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0201630#pone.0201630.ref009" target="_blank">9</a>] mimicked <i>in vivo</i> visual physiology experiments (bottom), where a mouse watches visual stimuli (e.g., drifting gratings) while the activity of neurons in its cortex is recorded. (Center) The top view of the cortical surface, with boundaries of cortical areas delineated (VISp is V1). The inner boundary encloses the part of the tissue that was modeled using biophysically detailed cells, whereas the tissue between the inner and outer circles was modeled using simplified LIF cells. (Top) The 3D visualization of the layer 4 model (only 10% of cells are shown for clarity). (<b>B</b>) Example of synaptic innervation of the biophysically detailed cell models of each type. Synapses (depicted as spheres) are color coded according to their source cell type. (<b>C</b>) Rastergrams of the external inputs: (Top) "background" input (BKG, khaki) that switches between "ON" and "OFF" states, loosely representing different brain states; (Bottom) LGN input (green) corresponding to the visual response to a 0.5 second gray screen (gray line) followed by a 2.5 second drifting grating (black line). (<b>D</b>) The connection matrix showing the peak conductance strength for connections between each pair of cell types. (<b>E</b>) Simulation output: (Top) spike raster in the biophysical "core". The node_ids are ordered such that cells with similar ids have similar preferred orientation angles. In this example, cells preferring ~0, ~180, and ~360 degrees are responding strongly to a horizontal drifting grating. (Bottom) Somatic voltage traces and the corresponding calcium traces for example excitatory (red) and inhibitory (blue) cells.</p>

    Computing extracellular potential.

    <p>(<b>A</b>) Schematic of the compartmental model of a cell in relation to the recording electrode. The calculation of the extracellular potential involves computing the transfer resistances <i>R<sub>mn</sub></i> between each n-th dendritic segment and m-th recording site on the electrode. (<b>B</b>) Extracellular spike "signatures" of individual cells recorded on the mesh electrode (black dots), using two single-cell models from the layer 4 network model as examples: PV2 (left) and Nr5a1 (right). (<b>C</b>) Modeled extracellular recordings with the linear electrode positioned along the axis of the cylinder in the layer 4 model (left). Extracellular potential responses (right) show all simulated data (color map) as well as six selected channels (black traces superimposed on the color map).</p>
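    The transfer-resistance calculation described in panel (A) can be sketched numerically. In the common point-source approximation (an assumption here; the paper's exact formulation may differ), R_mn = 1/(4πσd_mn), where σ is the extracellular conductivity and d_mn the distance between recording site m and segment n; the potential at site m is then the sum over segments of R_mn times each segment's membrane current. All geometry and current values below are illustrative.

    ```python
    import numpy as np

    sigma = 0.3   # extracellular conductivity, S/m (assumed)
    # m recording sites and n dendritic segments, coordinates in meters
    sites = np.array([[0.0, 0.0, 50e-6], [0.0, 0.0, 100e-6]])
    segs = np.array([[0.0, 0.0, 0.0], [0.0, 10e-6, 0.0]])

    # Pairwise distances d_mn between sites and segments, shape (m, n)
    d = np.linalg.norm(sites[:, None, :] - segs[None, :, :], axis=2)
    R = 1.0 / (4.0 * np.pi * sigma * d)    # transfer resistances R_mn

    I = np.array([1e-9, -1e-9])            # membrane currents per segment (A)
    phi = R @ I                            # extracellular potential at each site (V)
    ```

    Because R depends only on geometry and conductivity, it can be computed once and reused for every timestep of the simulated currents.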

    Computational performance.

    <p>(<b>A</b>) Scaling of wall time duration (normalized by the duration on a single CPU core) with the number of CPU cores for the simulation set up (blue circles) and run (red circles) of the layer 4 model (see <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0201630#pone.0201630.g005" target="_blank">Fig 5</a>). The ideal scaling is indicated by the dashed line. (<b>B</b>) Wall time increase when computing the extracellular potential for both set up (blue circles) and run (red circles) durations. (<b>C</b>) Scaling of the wall time with the simulated time for a long simulation. The non-ideal scaling with the increase in the number of cores corresponds to the deviations from the dashed line in (A).</p>

    Running simulations.

    <p>(<b>A</b>) Relationships among the various elements involved in running simulations with BioNet. The pre-built network (blue) is passed to the main Python script (pink), which loads custom user modules and runs BioNet/NEURON to produce the simulation output (purple). (<b>B</b>) The stages of the simulation executed by the main Python script. (<b>C</b>) Algorithm for distributing the cells over a parallel architecture. This simple example shows 10 cells distributed across 4 parallel processes (typically each parallel process corresponds to a CPU core). Cells are assigned to each process in turn (a "round-robin" assignment).</p>
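    The round-robin assignment in panel (C) can be sketched in a few lines of Python, with the numbers matching the example (10 cells over 4 processes):

    ```python
    # Round-robin distribution: cell gid goes to process gid mod n_procs.
    n_cells, n_procs = 10, 4
    assignment = {rank: [gid for gid in range(n_cells) if gid % n_procs == rank]
                  for rank in range(n_procs)}
    # Process 0 gets cells 0, 4, 8; process 1 gets 1, 5, 9; and so on.
    ```

    The scheme needs no global bookkeeping: each process can determine its own cells from its rank alone, which is why it is a common default for parallel NEURON-style simulations.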

    Building networks.

    <p>(<b>A</b>) High-level specification of a simple example network (left) and corresponding builder API commands (right). The model is composed of two cell types, inhibitory (blue) and excitatory (red), which form connections both within and between the cell types. The API commands define the number of cells of each type to be created, the connectivity rule (con_func) to use and its associated parameters (con_func_params), as well as additional edge parameters (edge_type_params). (<b>B</b>) Illustration of creating cells (left), where each cell type may include both biophysical (morphological reconstruction) and LIF models (circles). The corresponding API commands for adding nodes for the biophysically detailed subset of excitatory populations are illustrated on the right. Here we specify the number of nodes to be created (N), the type of model (model_type), the dynamical cell models (model_template) and the corresponding model parameters (dynamics_params), morphologies (morphology_file), and positions of cell somata (positions) that were computed with a user-defined function. (<b>C</b>) Illustration of connecting the cells into a network (left) and the corresponding API commands for adding a particular subset of connections (right). Here, the cells satisfying the query for both the source and target nodes will be connected using a function (connection_rule) with parameters (connection_params). The additional edge_type attributes are shared across the added edges and include the synaptic strength (syn_weight), a function modulating synaptic strength (weight_function), the dynamical synaptic model (model_template) and its corresponding parameters (dynamics_params), a conduction delay (delay), as well as the locations where synapses may be placed on a cell (target_sections, distance_range).</p>
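    As a rough illustration of the builder-style call pattern this caption describes: the parameter names (N, model_type, connection_rule, syn_weight, delay, ...) come from the figure, but the stub class below is ours, written only so the snippet runs standalone. It is not BioNet's actual implementation.

    ```python
    class NetworkBuilder:
        """Toy stand-in that records add_nodes/add_edges calls."""
        def __init__(self, name):
            self.name, self.nodes, self.edges = name, [], []

        def add_nodes(self, N, **node_type_props):
            # One call defines a whole population sharing node-type properties.
            self.nodes.append(dict(N=N, **node_type_props))

        def add_edges(self, source, target, **edge_type_props):
            # Source/target are queries over node properties; edge-type
            # properties are shared across all edges this call adds.
            self.edges.append(dict(source=source, target=target, **edge_type_props))

    net = NetworkBuilder("example")
    net.add_nodes(N=80, model_type="biophysical",
                  model_template="exc_template",        # placeholder names
                  dynamics_params="exc_params.json")
    net.add_nodes(N=20, model_type="point_process")     # e.g., LIF cells
    net.add_edges(source={"model_type": "biophysical"},
                  target={"model_type": "point_process"},
                  connection_rule=lambda s, t: 1,       # one synapse per pair
                  syn_weight=5e-5, delay=2.0)
    ```

    The design choice worth noting is the split between per-node data and shared node-type/edge-type properties, which keeps large networks compact to describe.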